Introduction to Open Data Science - Course Project

About the project

The link to my GitHub repository is https://github.com/statnaia/IODS-project
The link to GitHub Pages is https://statnaia.github.io/IODS-project/


Regression and model validation

Dataset: JYTOPKYS3
- The dataset is an international survey of Approaches to Learning done by Kimmo Vehkalahti in 2014-2015
- The dataset learning2014 consists of 166 rows and 7 variables.

#reading the dataset
learning2014 <- read.csv("D:/Desktop/Courses/Data Science/IODS-project/Data/learning2014.csv", sep=" ", header=TRUE)

#checking the structure and dimensions of the dataset
str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(learning2014)
## [1] 166   7

First we explore the data by constructing scatter plots, density plots (PDFs) and correlations between the variables by gender. Pink denotes the female participants of the survey and cyan denotes the males. The number of females is approximately twice the number of males, and most of the respondents are under the age of 35-40. The boxplots and density plots of Global attitude toward statistics, Deep approach, Surface approach, Strategic approach and Total points look quite similar for both genders. Overall, males have somewhat higher attitude scores than females, and the opposite holds for the Surface approach. Surface approach scores are negatively correlated with all the other variables.

The correlations between the variables are in general quite low and non-significant in many cases. This can also be seen from the scatter plots: the relationships between the variables look mostly quite random. Surface approach scores are negatively correlated with all the other variables, but the correlations are significant only for males, with the Deep approach and the Global attitude toward statistics. On the other hand, the Global attitude toward statistics and Total points are significantly positively correlated for both males and females.
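To check an individual correlation and its significance numerically, a correlation test can be run. A minimal sketch (pooling both genders rather than splitting by gender as in the plot) for attitude versus exam points:

# correlation test between attitude and exam points (both genders pooled)
cor.test(learning2014$attitude, learning2014$Points)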

# access the GGally and ggplot2 libraries
#install.packages("ggplot2")
#install.packages("GGally")
#install.packages("dplyr")

library(ggplot2)
library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
# create a more advanced plot matrix with ggpairs()
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

# draw the plot
p

Having studied the relationships between the variables, it seems that the Global attitude toward statistics explains the variation in Total points best. Nevertheless, adding two other variables, Strategic approach and Surface approach, might improve the model. A summary of a multiple linear regression model is shown below.

# creating a multiple regression model with attitude, strategic learning, and surface learning as explanatory variables
# target variable is Points
my_model <- lm(Points ~ attitude + stra + surf, data = learning2014)

# print out a summary of the model
summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude + stra + surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

Statistical significance is marked by stars next to the t value and Pr(>|t|) columns. Global attitude toward statistics is indeed significantly positively associated with exam points, but the other two variables are not: their p-values are above the conventional 0.05 threshold, so the null hypothesis that their coefficients are zero is not rejected.
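As a small programmatic check (not part of the original output), the same coefficient table can be extracted as a matrix, so the individual p-values can be inspected directly:

# extract the coefficient table of the fitted model as a matrix
coef_table <- summary(my_model)$coefficients
# p-values of the individual coefficients
coef_table[, "Pr(>|t|)"]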

Based on these results, I remove these two parameters and make a new model:

#remove stra and surf and run model again
my_model2 <- lm(Points ~ attitude, data = learning2014)

# print out a summary of the model
summary(my_model2)
## 
## Call:
## lm(formula = Points ~ attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

The simpler linear model is preferable to the multiple regression model in this case: the non-significant predictors have been dropped and the residuals remain of a similar size, indicating a comparable fit. The model fit is described by the Multiple R-squared value of 0.19, meaning that the model explains about 19 percent of the variance in the dependent variable. In this simple linear regression, differences in attitude therefore explain about a fifth of the variance in exam points.
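As a small sketch of what the R-squared means, it can be computed by hand from the residuals as 1 - RSS/TSS:

# residual sum of squares and total sum of squares
rss <- sum(residuals(my_model2)^2)
tss <- sum((learning2014$Points - mean(learning2014$Points))^2)
# should reproduce the Multiple R-squared reported above (about 0.19)
1 - rss / tss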

From the “Residuals vs Fitted” plot we can see that the relationship between the residuals and the fitted values is quite random, which indicates that the size of the errors does not depend on the explanatory variable. In the “Normal Q-Q” plot we see that the errors are reasonably normally distributed and thus meet the normality assumption, and the “Residuals vs Leverage” plot implies that no single observation has an unusually high impact on the model. The model diagnostics show a reasonably good fit to the data.

# draw diagnostic plots
par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))


Logistic regression

Alcohol consumption dataset

The dataset is based on questionnaires on student achievement in secondary education in two Portuguese schools. The data attributes include student grades, demographic, social and school related features.

The data were combined from two datasets: one describing student performance in Mathematics and one describing student performance in the Portuguese language. The alcohol consumption of each student is measured with the variable “alc_use” and high alcohol consumption with the variable “high_use”: if alc_use is greater than 2, high_use is TRUE.

alc <- read.csv("./Data/alc.csv")
colnames(alc)
##  [1] "X"          "school"     "sex"        "age"        "address"   
##  [6] "famsize"    "Pstatus"    "Medu"       "Fedu"       "Mjob"      
## [11] "Fjob"       "reason"     "guardian"   "traveltime" "studytime" 
## [16] "schoolsup"  "famsup"     "activities" "nursery"    "higher"    
## [21] "internet"   "romantic"   "famrel"     "freetime"   "goout"     
## [26] "Dalc"       "Walc"       "health"     "alc_use"    "high_use"  
## [31] "failures"   "paid"       "absences"   "G1"         "G2"        
## [36] "G3"

Relationships between alcohol consumption and other variables

Here I study the relationships between high/low alcohol consumption (high_use) and some of the other variables in the dataset: age, famrel (quality of family relationships), higher (wish to take higher education) and goout (going out with friends).

My hypotheses for each of them are the following:

  1. Older students are more likely to consume more alcohol.
  2. Having good (from 3 to 5) family relationships leads to lower alcohol consumption.
  3. Wish to get a higher education leads to lower alcohol consumption.
  4. Going out a lot may lead to drinking more alcohol.

Age

Age varies between 15 and 22 years for both men and women; the mean age is 16.6. Male students with high alcohol consumption tend to be roughly one year older (mean age about 17) than those with lower consumption (mean age about 16), while for women the situation is reversed. My hypothesis was thus only partially correct, namely for men.

The barplot highlights increasing alcohol consumption from age 15 to 17 for women and high consumption at ages 15-18 for men.

library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
summary(alc$age)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   15.00   16.00   17.00   16.58   17.00   22.00
g1 <- ggplot(alc, aes(x = high_use, y = age, col = sex))
g1 + geom_boxplot() + ylab("age") +ggtitle("Student age by high alcohol use and sex")

g2 <- ggplot(data = alc, aes(x = age, fill=high_use))
g2 + geom_bar() + facet_wrap("sex")

Family relationships

Quality of family relationships varies between 1 and 5; the mean value is about 3.9. The boxplot clearly shows that the quality of relationships within the family influences the amount of alcohol consumed, so my hypothesis was correct. The barplot suggests that the family microclimate is somewhat more important for females.

summary(alc$famrel)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   4.000   4.000   3.935   5.000   5.000
g3 <- ggplot(alc, aes(x = high_use, y = famrel, col = sex))
g3 + geom_boxplot() + ylab("Quality of relationship") + ggtitle("Family relationships")

g4 <- ggplot(data = alc, aes(x = famrel, fill=sex))
g4 + geom_bar() + facet_wrap("high_use")

Wish to take higher education

Students who plan to continue to higher education tend to consume less alcohol, so the hypothesis was partly right. Nevertheless, the number of students who consume a lot of alcohol and still want to enter university is surprisingly high, especially among males; almost every student wants to get a higher education. Note that there are no females who consume a lot of alcohol and do not want a higher education.

summary(alc$freetime)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   3.000   3.224   4.000   5.000
g7 <- ggplot(alc, aes(x = high_use, y = higher, col = sex))
g7 + geom_boxplot() + ylab("higher education")+ggtitle("Student wants to take higher education")

g8 <- ggplot(data = alc, aes(x = higher, fill=sex))
g8 + geom_bar() + facet_wrap("high_use")

Going out with friends

High-use drinkers are more likely to go out with friends than low-use drinkers, for both genders; the difference is more prominent for males. So the hypothesis was right.

summary(alc$freetime)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   3.000   3.224   4.000   5.000
g7 <- ggplot(alc, aes(x = high_use, y = goout, col = sex))
g7 + geom_boxplot() + ylab("going out")+ggtitle("Student goes out with friends")

g8 <- ggplot(data = alc, aes(x = goout, fill=sex))
g8 + geom_bar() + facet_wrap("high_use")

Logistic regression

#Fitting a logistic regression model 
model1 <- glm(high_use ~ age + famrel + higher + goout, data = alc, family = "binomial")
#Printing out a summary of the model
summary(model1)
## 
## Call:
## glm(formula = high_use ~ age + famrel + higher + goout, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7714  -0.7794  -0.5518   0.9972   2.4016  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.44699    2.10067  -1.165  0.24407    
## age          0.08397    0.11135   0.754  0.45079    
## famrel      -0.39393    0.13599  -2.897  0.00377 ** 
## higheryes   -0.83261    0.60855  -1.368  0.17125    
## goout        0.76934    0.12021   6.400 1.56e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 391.47  on 365  degrees of freedom
## AIC: 401.47
## 
## Number of Fisher Scoring iterations: 4

The logistic regression shows the statistical relationship between the explanatory variables and the binary high/low alcohol consumption variable.

The summary of the model shows that age and the wish to take higher education are not statistically significant, while family relationships and going out with friends are. Thus, poor family relationships and going out a lot with friends both increase the odds of high alcohol consumption.

The factor variable in the model (higher) shows how the wish to take higher education affects alcohol consumption: the coefficient ‘higheryes’ compares students who want a higher education with those who do not, and a Wald test checks whether this difference is different from zero. Here it is not significant, because practically all students want a higher degree, as we saw earlier.

Odds ratios (ORs) and confidence intervals (CIs)

OR <- coef(model1) %>% exp
CI <- confint(model1) %>% exp
## Waiting for profiling to be done...
#Printing out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR       2.5 %    97.5 %
## (Intercept) 0.08655379 0.001375242 5.3046944
## age         1.08759925 0.874151223 1.3541589
## famrel      0.67440091 0.514884010 0.8792122
## higheryes   0.43491074 0.127926535 1.4297848
## goout       2.15834687 1.716083835 2.7521056

The ORs for age and higher education are not significant because their confidence intervals contain the value 1. In contrast, a good family relationship is associated with decreased alcohol use.

The odds of high alcohol consumption for the significant variables (a worked check of one odds ratio is sketched below):
1. Students with a good family situation are less likely to drink a lot (OR < 1: the exposure is associated with lower odds of the outcome).
2. The odds of high consumption for students who go out a lot are roughly 1.7 to 2.8 times those of students who do not (OR > 1: the exposure is associated with higher odds of the outcome).
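As a quick check, an odds ratio is simply the exponential of the corresponding model coefficient; for example, for ‘goout’:

# the odds ratio of 'goout' recovered from the model coefficient
exp(coef(model1)["goout"])   # about 2.16, matching the table above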

Predictive power of the model

#Exploring the predictive power of the model (model1) fitted above.

#Predicting the probability of high_use
probabilities <- predict(model1, type = "response")

#Adding the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% tail(10)  
##     failures absences sex high_use probability prediction
## 361        1        7   M     TRUE  0.19314246      FALSE
## 362        0        3   M     TRUE  0.44938048      FALSE
## 363        0        2   M     TRUE  0.07079942      FALSE
## 364        0        4   M     TRUE  0.53182804       TRUE
## 365        0        3   M    FALSE  0.19604475      FALSE
## 366        0        4   M     TRUE  0.44938048      FALSE
## 367        0        0   M    FALSE  0.14122760      FALSE
## 368        1        4   M     TRUE  0.45450785      FALSE
## 369        2        8   M     TRUE  0.35976051      FALSE
## 370        1        0   M    FALSE  0.15172201      FALSE
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   238   21
##    TRUE     74   37
# graphic visualizing of both the actual values and the predictions
g <- ggplot(alc, aes(x = probability, y = high_use,col=prediction))
g + geom_point()

The model does not do a perfect job with predicting alcohol consumption, as it predicts wrongly approximately every 4th time. Next we compute the total proportion of inaccurately classified individuals (the training error):

#Tabulating the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.64324324 0.05675676 0.70000000
##    TRUE  0.20000000 0.10000000 0.30000000
##    Sum   0.84324324 0.15675676 1.00000000

As can be seen from the plot and the prediction table, the FALSE category makes up 70% of the high_use observations (0.7) and TRUE 30% (0.3). Nevertheless, the model still does a better job than simply guessing the majority class.
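The overall training accuracy and error can also be computed directly from the predictions; a minimal sketch:

# proportion of correctly and incorrectly classified individuals
mean(alc$high_use == alc$prediction)   # accuracy
mean(alc$high_use != alc$prediction)   # training error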

Loss function

# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# the average number of wrong predictions in the alc data
loss_func(class = alc$high_use, prob = 0)
## [1] 0.3
loss_func(class = alc$high_use, prob = 1)
## [1] 0.7
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2567568

The results are in agreement with the previous analysis. The output numbers denote the average proportion of wrong predictions in the training data. If the probability of high_use is set to zero for every individual, the error is 0.3; setting it to one gives the complementary error of 0.7. Using the probabilities predicted by the model gives an error of about 0.26, which is better than either constant guess.
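As a quick sanity check, the constant-guess errors above are simply the class proportions of high_use in the data:

# proportion of TRUE values = error when guessing probability 0 for everyone
mean(alc$high_use)
# complementary proportion = error when guessing probability 1 for everyone
1 - mean(alc$high_use)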

Bonus exercise

10-fold cross-validation on the model

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model1, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2837838

With a cross-validation prediction error of about 0.28, my model performs slightly worse than the model introduced in the DataCamp exercise (error of about 0.26). Below are some models that perform better.
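Because the folds are assigned randomly, a single 10-fold cross-validation error varies from run to run. A small sketch (not part of the original exercise) of averaging the error over repeated cross-validations to get a more stable estimate:

# repeat 10-fold cross-validation several times and average the errors
set.seed(2017)
cv_errors <- replicate(10, cv.glm(data = alc, cost = loss_func,
                                  glmfit = model1, K = 10)$delta[1])
mean(cv_errors)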

Super - Bonus exercise

First make a model with a lot of predictors

model2 <- glm(high_use ~ school + sex + age + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model2, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2378378

Judging by the cross-validation error, the model with many predictors actually performs better than the original four-predictor model (about 0.24 versus 0.28), although a model with this many predictors is harder to interpret.

Next model:

my_model5 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model5, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2351351

The error rate gets smaller when reducing the predictors.

Next model:

my_model6 <- glm(high_use ~ studytime + famsup + activities + goout + absences, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model6, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2405405

With only five predictors the error rate stays at roughly the same level (about 0.24): dropping the predictors that have little relationship with high_use does not hurt the predictive performance.


Clustering and classification

Dataset description

The Boston dataset is loaded from the MASS package of R.
This dataset contains housing values in the suburbs of Boston and has 506 observations and 14 variables; two of them (‘chas’ and ‘rad’) are integer-valued and the rest are numeric.
Description of the dataset can be found here: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html

Variables of the dataset are:
1. ‘crim’ (per capita crime rate by town)
2. ‘zn’ (proportion of residential land zoned for lots over 25,000 sq.ft)
3. ‘indus’ (proportion of non-retail business acres per town)
4. ‘chas’ (Charles River dummy variable (= 1 if tract bounds river; 0 otherwise))
5. ‘nox’ (nitrogen oxides concentration (parts per 10 million))
6. ‘rm’ (average number of rooms per dwelling)
7. ‘age’ (proportion of owner-occupied units built prior to 1940)
8. ‘dis’ (weighted mean of distances to five Boston employment centres)
9. ‘rad’ (index of accessibility to radial highways)
10. ‘tax’ (full-value property-tax rate per $10,000)
11. ‘ptratio’ (pupil-teacher ratio by town)
12. ‘black’ (1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town)
13. ‘lstat’ (lower status of the population (percent))
14. ‘medv’ (median value of owner-occupied homes in $1000s)

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
library(dplyr)
data(Boston)

# explore the dataset: dimensions, structure and summary
dim(Boston)
## [1] 506  14
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The summary shows the minimum, maximum, mean, and the first, second (median) and third quartiles of each variable of the dataset.
The dataset has 506 rows and 14 columns.
The variables have very different ranges and are not directly comparable with each other, which means that standardization is required before the analysis.

Graphical overview of the dataset:

# plot matrix of the variables
pairs(Boston)

The overview is a bit messy, but it offers visual information on how the variables are connected to each other: e.g. there is a hyperbolic relationship between ‘nox’ and ‘dis’ and between ‘lstat’ and ‘medv’, and an almost linear relationship between ‘rm’ and ‘lstat’.

library(tidyr)
library(corrplot)
## Warning: package 'corrplot' was built under R version 4.1.2
## corrplot 0.92 loaded
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits = 2)

# print the correlation matrix
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax ptratio
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58    0.29
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31   -0.39
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72    0.38
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04   -0.12
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67    0.19
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29   -0.36
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51    0.26
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53   -0.23
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91    0.46
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00    0.46
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46    1.00
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44   -0.18
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54    0.37
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47   -0.51
##         black lstat  medv
## crim    -0.39  0.46 -0.39
## zn       0.18 -0.41  0.36
## indus   -0.36  0.60 -0.48
## chas     0.05 -0.05  0.18
## nox     -0.38  0.59 -0.43
## rm       0.13 -0.61  0.70
## age     -0.27  0.60 -0.38
## dis      0.29 -0.50  0.25
## rad     -0.44  0.49 -0.38
## tax     -0.44  0.54 -0.47
## ptratio -0.18  0.37 -0.51
## black    1.00 -0.37  0.33
## lstat   -0.37  1.00 -0.74
## medv     0.33 -0.74  1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)

In the correlation plot, red circles denote negative correlations and blue circles positive ones. The larger and darker the circle, the stronger the correlation between the two variables.

There is a quite strong correlation between the ‘nox’ parameter (nitrogen oxides concentration) and such parameters as ‘age’ (proportion of owner-occupied units built prior to 1940), ‘dis’ (weighted mean of distances to five Boston employment centres), ‘rad’ (index of accessibility to radial highways), ‘tax’ (full-value property-tax rate per $10,000) and ‘lstat’ (lower status of the population, percent). The nitrogen oxides concentration is positively correlated with the proportion of older buildings, accessibility to the radial highways, higher taxes and the share of lower-status population. On the other hand, the higher the concentration of nitrogen oxides, the shorter the weighted mean distance to the employment centres (negative correlation).

Next, there is also a relationship between ‘lstat’ and ‘medv’ (median value of owner-occupied homes in $1000s): the larger the share of lower-status population, the lower the median value of the homes in the area, which is to be expected. Similar logic applies to the relationships of ‘lstat’ and ‘medv’ with ‘rm’ (average number of rooms per dwelling): the more rooms per dwelling, the higher the median value of the homes and the smaller the share of lower-status residents.

Furthermore, the ‘rad’ variable is positively correlated with ‘tax’, which suggests that property taxes are higher in areas with better access to the radial highways.

Lastly, there are rather strong correlations between ‘indus’ (proportion of non-retail business acres per town) and ‘nox’, ‘age’, ‘dis’ and ‘tax’. The more industry there is in a town, the more air pollution there is, the older the buildings, the shorter the distances to the employment centres and the higher the tax.

Based on this analysis, it is safe to say that the variables of the dataset are mostly related to each other and it is possible to build a prediction model using the interplay between parameters.

library(GGally)
library(ggplot2)
p <- ggpairs(Boston, lower = list(combo = wrap("facethist", bins = 20))) 
p

Only the ‘rm’ variable looks approximately normally distributed. The other variables are not normally distributed and are measured on very different scales.
Therefore, the dataset needs to be scaled.

Dataset standardization

For the reasons stated above we need to scale the dataset first.

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# change the object to data frame so that it will be easier to use the data
boston_scaled <- as.data.frame(boston_scaled)
class(boston_scaled)
## [1] "data.frame"

The scale (minimum and maximum) has changed for all the variables, and the mean of each variable is now zero.
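As a small check of what scale() does, the standardized values are simply (x - mean(x)) / sd(x) for each column:

# manual standardization of one column should match the scaled data
manual_crim <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(boston_scaled$crim, manual_crim)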

Now we need to create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate) using quantiles as the break points.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110

The minimum value is -0.42 and the maximum is 9.92. The first quartile is -0.41, the median is -0.39 and the third quartile is 0.007.

# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610

These are the limits for each category.

# create a categorical variable 'crime'
labels <- c("low", "med_low", "med_high", "high")
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label=labels)

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127

127 observations fall into the first and the last categories, and 126 into the second and the third. Values between -0.419 and -0.411 are in category ‘low’, values between -0.411 and -0.39 in ‘med_low’, values between -0.39 and 0.00739 in ‘med_high’, and values between 0.00739 and 9.92 in ‘high’.

# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

Here we removed the original variable (crim) from the scaled dataset and added the new categorized variable (crime) to the dataset.

The dataset is prepared now and we can divide the data into training (80%) and testing (20%) sets.

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]
dim(train)
## [1] 404  14
# create test set 
test <- boston_scaled[-ind,]
dim(test)
## [1] 102  14

Train dataset has 404 rows and 14 columns. Test dataset has 102 rows and 14 columns.

Linear Discriminant analysis (LDA)

Let’s train a Linear Discriminant Analysis (LDA) classification model, with the categorical crime rate as the target variable and all the other variables in the dataset as predictors.

lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2549505 0.2301980 0.2475248 0.2673267 
## 
## Group means:
##                  zn      indus        chas        nox         rm        age
## low       0.9272890 -0.8799092 -0.08120770 -0.8492209  0.4871199 -0.8413711
## med_low  -0.1287781 -0.2849166 -0.06065701 -0.5535816 -0.1377033 -0.3324743
## med_high -0.3920529  0.1944829  0.20012296  0.4372278  0.0409122  0.4569669
## high     -0.4872402  1.0169921 -0.01714665  1.0491534 -0.4199315  0.8062660
##                 dis        rad        tax    ptratio       black       lstat
## low       0.7982590 -0.6852768 -0.7457910 -0.4610980  0.37690576 -0.77126157
## med_low   0.3091095 -0.5508873 -0.4979635 -0.1165423  0.35114331 -0.15540125
## med_high -0.3795806 -0.4179739 -0.3205060 -0.2732326  0.08618194  0.08804528
## high     -0.8528578  1.6393984  1.5149640  0.7822555 -0.85401699  0.87643777
##                  medv
## low       0.546943954
## med_low  -0.006139132
## med_high  0.121798521
## high     -0.671551043
## 
## Coefficients of linear discriminants:
##                  LD1          LD2         LD3
## zn       0.095085437  0.716826226 -0.97019557
## indus    0.022560702 -0.238104370  0.35011479
## chas    -0.087760351 -0.038031484  0.03713456
## nox      0.345539155 -0.699867325 -1.41186443
## rm      -0.123465165 -0.080201792 -0.18044173
## age      0.288946755 -0.389562657 -0.18346696
## dis     -0.057257040 -0.328977645  0.04985308
## rad      3.174698074  0.944708717  0.02815220
## tax      0.003019375 -0.021726043  0.54053635
## ptratio  0.137880964  0.007597424 -0.44923191
## black   -0.134187800  0.024945310  0.18683483
## lstat    0.184896322 -0.313083591  0.23500299
## medv     0.190998713 -0.428128979 -0.28226311
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9494 0.0380 0.0125

Prior probabilities of groups: the proportions of training observations in each group. The observations are distributed more or less equally across the groups (all in the range of 23-27%; the exact numbers change every time the analysis is run, because the 80% training sample is drawn randomly).

Group means denote the centre of gravity of each group, i.e. the mean of each variable within each group.

The coefficients of linear discriminants define the linear combinations of the predictor variables that form the LDA decision rule. The proportion of trace is the proportion of between-group variance captured by each discriminant function.

LD1 accounts for about 95% of the trace, whereas the contributions of the other LDs are small, suggesting that the first discriminant explains almost all of the between-group variability in the data.
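The proportion of trace can also be recovered from the singular values stored in the fitted object; a minimal sketch:

# proportion of between-group variance captured by each discriminant
lda.fit$svd^2 / sum(lda.fit$svd^2)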

Next we draw the LDA biplot. The color in the biplot indicates each cluster.

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

From the plot we can see again that accessibility to radial highways (rad) has the highest LD1 coefficient.

Predicting the test data

To evaluate predictions of the crime rate we take the crime classes from the test set and save them as correct_classes (so that the predictions can be compared against them), and then remove the crime variable from the test dataset.

# save the correct classes from test data
correct_classes <- test$crime
class(correct_classes)
## [1] "factor"
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
colnames(test)
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

The crime variable is no longer present in the test dataset.

Next we predict the crime rate and compare the predictions to the correct_classes.

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       16       8        0    0
##   med_low    7      17        9    0
##   med_high   1       4       20    1
##   high       0       0        0   19

The predictions of the model are fairly good: for each category, the number of correct predictions (on the diagonal of the table) is the largest count in its row.
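The overall accuracy on the test set can also be computed directly; a minimal sketch:

# proportion of test observations classified correctly
mean(correct_classes == lda.pred$class)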

K-means clustering

Next we load again the Boston dataset and scale it to get comparable distances.

# load the Boston dataset, scale it and create the euclidean distance matrix
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled, method = "euclidean", diag = FALSE, upper = FALSE, p = 4)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

Euclidean distance is simply the geometric (straight-line) distance between two points, while Manhattan distance is the sum of the absolute differences between the coordinates of the two points.
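As a small worked illustration (not part of the original exercise), the two distances can be computed by hand for the first pair of scaled observations:

# euclidean and manhattan distances between observations 1 and 2
x <- unlist(boston_scaled[1, ])
y <- unlist(boston_scaled[2, ])
sqrt(sum((x - y)^2))   # euclidean: square root of the sum of squared differences
sum(abs(x - y))        # manhattan: sum of absolute differences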

Let’s calculate the manhattan distance.

dist_man <- dist(boston_scaled, method = "manhattan", diag = FALSE, upper = FALSE, p = 4)
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

Next, we run the k-means algorithm on the dataset. K-means is an unsupervised clustering method that assigns observations to groups (clusters) based on the similarity of the objects. It takes the number of clusters as an argument, so the optimal number of clusters has to be determined. First we run k-means with 8 clusters, each identified by a different colour. The plot looks very colourful, but it is obvious that this number of clusters is too large.

# k-means clustering
km <-kmeans(Boston, centers = 8)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

One way to determine the optimal number of clusters is to look at how the total within-cluster sum of squares (WCSS) behaves as the number of clusters changes: the optimal number is where the total WCSS drops radically.

# MASS, ggplot2 and Boston dataset are available
set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})

# visualize the results
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')

It looks like 2 is the optimal number of clusters, since the curve drops dramatically at k = 2.

Therefore, we will run the k-means analysis with only 2 centroids.

# k-means clustering
km <-kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

So the optimal number of clusters is 2; using more than that is redundant. Let's zoom in to have a better look at the analysis.

# zoom in to specific columns
pairs(Boston[1:5], col = km$cluster)

# zoom in to specific columns
pairs(Boston[6:10], col = km$cluster)

# zoom in to specific columns
pairs(Boston[10:14], col = km$cluster)

We can see that the clusters denoted by red and black are clearly distinguishable from each other, which supports the idea of two being the optimal number of clusters.

Our previous correlation-based conclusions can be observed here too: pairs such as ‘indus’ and ‘nox’, ‘lstat’ and ‘medv’, ‘medv’ and ‘rm’, ‘rm’ and ‘lstat’, and ‘dis’ and ‘nox’ appear to have linear or hyperbolic relationships.

Bonus: k-means on the original Boston data

library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
boston_scaled <- dplyr::select(boston_scaled, -crim)
n <- 506
ind <- sample(n,  size = n * 0.8)
ktrain <- boston_scaled[ind,]
ktest <- boston_scaled[-ind,]
km <-kmeans(ktrain, centers = 4)
#length(km)
lda.fit <- lda(km$cluster ~ . , data = ktrain)
lda.fit
## Call:
## lda(km$cluster ~ ., data = ktrain)
## 
## Prior probabilities of groups:
##         1         2         3         4 
## 0.2227723 0.2747525 0.3811881 0.1212871 
## 
## Group means:
##           zn      indus        chas        nox         rm        age        dis
## 1 -0.4812850  0.5190046  0.16512651  0.4636541 -0.5324236  0.6083763 -0.5683231
## 2 -0.4872402  1.0403131 -0.02404347  1.0527448 -0.3806158  0.7760818 -0.8247049
## 3 -0.1241766 -0.6522457  0.06002355 -0.5753066  0.4028566 -0.3996920  0.3579442
## 4  2.2643344 -1.1492006 -0.27232907 -1.1976137  0.6718297 -1.4468053  1.6575639
##          rad        tax     ptratio       black      lstat       medv
## 1 -0.5926685 -0.2923421  0.05697847  0.04711926  0.4630253 -0.4278552
## 2  1.6182167  1.5342238  0.80494617 -0.81318997  0.8336384 -0.7146212
## 3 -0.5523147 -0.7449329 -0.40267404  0.36555371 -0.5852032  0.4973671
## 4 -0.6865511 -0.5704092 -0.76752788  0.34774570 -0.9327205  0.7233699
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.10622983  1.653335114  1.24563338
## indus    0.47940634 -0.167290290  0.50001998
## chas    -0.01827653 -0.110962089  0.03150472
## nox     -0.06167804 -0.048162595  0.66089368
## rm      -0.05933667  0.159892918 -0.18545570
## age      0.10134636 -0.430198895  0.19061438
## dis     -0.44615071  0.572352328 -0.21526290
## rad      3.58772420  0.718037376 -2.12731486
## tax      1.07739294  0.878746401  1.01263901
## ptratio  0.37148164 -0.070944851  0.57158684
## black   -0.03356943 -0.003761048 -0.08426066
## lstat    0.21415982  0.003520682  0.26131387
## medv    -0.06573643  0.162441226  0.03495705
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.8508 0.1216 0.0276
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
# target classes (the k-means clusters) as numeric
classes <- km$cluster
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

The plot shows the biplot for the LDA analysis using the k-means clusters as target classes. The ‘rad’ variable is again the most influential linear separator for the clusters, and ‘zn’ and ‘tax’ are the next most influential variables.

Super-bonus: 3D plots

Next, we create a matrix product, which is a projection of the data points and make a 3D plot of the columns of the matrix product.

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

# create 3D plot of the columns of the matrix product 
library(plotly)
## Warning: package 'plotly' was built under R version 4.1.2
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')

Now we create a 3D plot and colour it by the crime variable of the training dataset.

# create 3D plot of the columns of the matrix product 
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= train$crime)

Finally, we create 3D plot and color it by the clusters of the k-means.

# 3D plot by k means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= km$cluster)

The plots above differ only in their colouring, which highlights different features. The plot coloured by the crime variable shows that the ‘high’ crime-rate group is the most clearly defined, standing further away from most of the points of the other categories. The plot coloured by the k-means clusters shows the same 3D distribution, but there is no similar standalone group: the data points belong to different clusters without any clear pattern.


(more chapters to be added similarly as we proceed with the course!)